dynamics mismatch
Cross-Domain Policy Adaptation via Value-Guided Data Filtering
Generalizing policies across domains with a dynamics mismatch poses a significant challenge in reinforcement learning. For example, a robot may learn a policy in a simulator, but when it is deployed in the real world, the dynamics of the environment can differ. Given a source and a target domain with a dynamics mismatch, we consider the online dynamics adaptation problem, in which the agent can access plentiful source-domain data while online interactions with the target domain are limited. Existing research has attempted to solve the problem from the dynamics-discrepancy perspective. In this work, we reveal the limitations of these methods and explore the problem from the value-difference perspective via a novel insight on value consistency across domains. Specifically, we present the Value-Guided Data Filtering (VGDF) algorithm, which selectively shares transitions from the source domain based on the proximity of paired value targets across the two domains. Empirical results on various environments with kinematic and morphology shifts demonstrate that our method achieves superior performance compared to prior approaches.
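The core filtering rule described in the abstract — share a source-domain transition only when its value targets under the two domains are close — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the names `q_target`, `q_source`, and `keep_ratio` are hypothetical, and the value estimates are assumed to come from the learner's critics.

```python
import numpy as np

def value_guided_filter(source_batch, q_target, q_source, keep_ratio=0.25):
    """Keep the source-domain transitions whose paired value targets are
    closest across the two domains (hypothetical sketch of value-guided
    data filtering; the real method's selection rule may differ).

    source_batch: list of transitions sampled from the source domain
    q_target:     per-transition value targets estimated under the target domain
    q_source:     per-transition value targets estimated under the source domain
    keep_ratio:   fraction of the batch to share with the learner
    """
    gaps = np.abs(np.asarray(q_target) - np.asarray(q_source))
    k = max(1, int(len(gaps) * keep_ratio))
    # Smallest value gaps -> most value-consistent transitions across domains.
    idx = np.argsort(gaps)[:k]
    return [source_batch[i] for i in idx]
```

Transitions that pass the filter would then be mixed into the (scarce) target-domain replay data; the rest of the source batch is discarded for that update.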
Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis
Reinforcement learning has shown great promise for synthesizing realistic human behaviors by learning humanoid control policies from motion capture data. However, it is still very challenging to reproduce sophisticated human skills like ballet dance, or to stably imitate long-term human behaviors with complex transitions. The main difficulty lies in the dynamics mismatch between the humanoid model and real humans. That is, motions of real humans may not be physically possible for the humanoid model. To overcome the dynamics mismatch, we propose a novel approach, residual force control (RFC), that augments a humanoid control policy by adding external residual forces into the action space.
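The action-space augmentation described above can be illustrated with a short sketch: the policy's output is split into joint torques and an extra residual-force component applied externally to the character. This is a hypothetical illustration of the idea, not the paper's code; `apply_action`, `n_joints`, and `force_scale` are invented names, and the environment-stepping function is assumed to accept the residual force separately.

```python
import numpy as np

def apply_action(env_step, action, n_joints, force_scale=50.0):
    """Split an augmented action into joint torques and a residual external
    force (hypothetical sketch of RFC-style control).

    env_step:    callable taking (torques, residual_force) and advancing physics
    action:      policy output of length n_joints + force dimensions
    n_joints:    number of actuated joints in the humanoid model
    force_scale: scales the normalized residual-force part of the action
    """
    torques = action[:n_joints]
    # The residual force is the extra degree of freedom the policy can use
    # to compensate for the dynamics mismatch with the real human.
    residual_force = force_scale * action[n_joints:]
    return env_step(torques, residual_force)
```

In a full system the residual force would typically be penalized in the reward so the policy relies on it only where the humanoid's own actuation cannot reproduce the reference motion.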
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (0.68)
- Information Technology > Artificial Intelligence > Robots (0.68)
f76a89f0cb91bc419542ce9fa43902dc-AuthorFeedback.pdf
We'd like to first thank the reviewers for their constructive feedback. Here we aim to address the main questions raised by the reviewers. … RFC policy; they are analogous to the goals in DeepMimic. If we don't want the agent to go beyond its ability, then RFC could be extended to a scaffolding technique. … Also, as shown in the video, when the agent is forced to imitate demonstrations from other agents (e.g., …). Finally, for agent-object interaction, the RFs won't hinder learning since the policy can always learn … The RFs are only applied to stabilize the agent without changing object contact. A: Since the motion synthesis baselines are deterministic, i.e., no diversity (we …). Besides, the design of the cVAE itself is not the focus of the paper and can be replaced by other models.